[manila-csi-plugin] Seed fsName to ceph-csi's node plugin #2994
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
This option is now mandatory when there are multiple CephFS file systems in the Ceph cluster. Without it, Ceph won't be able to find the (sub)volume to mount.
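For reference, the metadata in question is the share's __mount_options property; assuming the manila OSC plugin's --property syntax, it can be set and verified like this (the share ID is a placeholder):

$ openstack share set --property __mount_options='fs=my_fs' ${SHARE_ID}
$ openstack share show ${SHARE_ID} -c properties -f value
{'__mount_options': 'fs=my_fs'}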
Hi @gouthampacha - Thank you for proposing this. I have tested this patch in our environment, but I am still running into the same issue. I've detailed the steps I've taken below, so please correct me if I'm doing something wrong or misunderstanding something. Cheers,
$ openstack share show cephfs-upstream -c properties -f value
{'__mount_options': 'fs=my_fs'}
$ openstack share access create ${SHARE_ID} cephx ${SHARE_NAME} --access-level rw
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: manila-preprovisioned-pv
spec:
  csi:
    driver: cephfs.manila.csi.openstack.org
    volumeHandle: my-preprovisioned-manila-vol
    volumeAttributes:
      shareID: "**redacted**" # openstack share id
      shareAccessID: "**redacted**" # openstack share access list $SHARE_ID -c ID -f value
    nodeStageSecretRef:
      name: os-trustee
      namespace: kube-system
    nodePublishSecretRef:
      name: os-trustee
      namespace: kube-system
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 1Gi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-preprovisioned-manila-vol
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
  volumeName: manila-preprovisioned-pv
  storageClassName: ""
---
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
    - name: nginx
      image: nginx
      imagePullPolicy: IfNotPresent
      ports:
        - containerPort: 80
          protocol: TCP
      volumeMounts:
        - mountPath: /var/lib/www/html
          name: my-vol
  volumes:
    - name: my-vol
      persistentVolumeClaim:
        claimName: my-preprovisioned-manila-vol
        readOnly: false

I can see in the nodeplugin logs that the fs is now correctly being pulled from the share metadata:

$ kubectl logs -n kube-system cern-magnum-openstack-manila-csi-nodeplugin-m27dm -c cephfs-nodeplugin | grep fs_name
I0930 08:02:02.030147 1 cephfs.go:140] Found fs_name in share metadata: my_fs

But looking at the pod which mounts this volume, the fs_name is not passed to the mount command; the only option passed is noatime:

Warning FailedMount 20s kubelet MountVolume.MountDevice failed for volume "manila-preprovisioned-pv" : rpc error: code = Internal desc = an error (exit status 1) occurred while running ceph-fuse args: [/var/lib/kubelet/plugins/kubernetes.io/csi/cephfs.manila.csi.openstack.org/xxx/globalmount -m **monitors** -c /etc/ceph/ceph.conf -n client.cephfs-upstream --keyfile=***stripped*** -r /volumes/_nogroup/yyy/zzz -o noatime] stderr: 2025-09-30T08:04:10.418+0000 7efeea243580 -1 init, newargv = 0x56072b9e69f0 newargc=15
2025-09-30T08:04:10.418+0000 7efeea243580 -1 init, args.argv = 0x56072bb3bc00 args.argc=4
ceph-fuse[773512]: starting ceph client
ceph-fuse[773512]: ceph mount failed with (2) No such file or directory
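For comparison, ceph-fuse selects a specific file system with its --client_fs flag (per the ceph-fuse man page), which is presumably the option missing from the node plugin's invocation above; a manual check, reusing the arguments from the failing command with an illustrative mount point, would look like:

$ ceph-fuse /mnt/test -m **monitors** -c /etc/ceph/ceph.conf -n client.cephfs-upstream --keyfile=***stripped*** -r /volumes/_nogroup/yyy/zzz --client_fs=my_fs -o noatime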
What this PR does / why we need it:
The CephFS partner plugin now requires a parameter called 'fsName' to explicitly specify which CephFS filesystem to locate subvolumes on when a Ceph cluster has multiple CephFS filesystems. This PR reads the value of 'fsName' from share metadata in Manila.
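A minimal sketch of that flow in Go, assuming the name arrives as an fs=<name> token in the __mount_options share metadata; the helper names (fsNameFromMetadata, buildVolumeContext) are illustrative, not the identifiers used in this PR:

// Sketch: pull the CephFS file system name out of Manila share metadata
// and seed it into the volume context handed to ceph-csi's node plugin.
package main

import (
	"fmt"
	"regexp"
)

// fsNameRe matches an "fs=<name>" token in the __mount_options value,
// e.g. "fs=my_fs" or "noatime,fs=my_fs".
var fsNameRe = regexp.MustCompile(`(?:^|[,\s])fs=([^,\s]+)`)

// fsNameFromMetadata returns the file system name found in the share
// metadata, or "" when none is set. (Illustrative helper name.)
func fsNameFromMetadata(metadata map[string]string) string {
	if m := fsNameRe.FindStringSubmatch(metadata["__mount_options"]); m != nil {
		return m[1]
	}
	return ""
}

// buildVolumeContext copies the existing ceph-csi parameters and adds
// "fsName" when the share metadata provides one. (Illustrative helper.)
func buildVolumeContext(base, metadata map[string]string) map[string]string {
	ctx := make(map[string]string, len(base)+1)
	for k, v := range base {
		ctx[k] = v
	}
	if name := fsNameFromMetadata(metadata); name != "" {
		ctx["fsName"] = name
	}
	return ctx
}

func main() {
	md := map[string]string{"__mount_options": "fs=my_fs"}
	base := map[string]string{"rootPath": "/volumes/_nogroup/yyy/zzz"}
	fmt.Println(buildVolumeContext(base, md)) // map[fsName:my_fs rootPath:/volumes/_nogroup/yyy/zzz]
}

Seeding the value into the volume context this way lets ceph-csi's node plugin pass the file system name to its mount command instead of relying on a single-filesystem default.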
Which issue this PR fixes (if applicable):
fixes #2992
fixes #2986
Release note: